Time stepping free numerical solution of linear differential equations: Krylov subspace versus waveform relaxation
The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift and invert technique.
One-site density matrix renormalization group and alternating minimum energy algorithm
Given in the title are two algorithms to compute an extreme eigenstate of a
high-dimensional Hermitian matrix using the tensor train (TT) / matrix product
states (MPS) representation. Both methods augment the traditional alternating
direction scheme with auxiliary (e.g. gradient) information, which
substantially improves convergence in many difficult cases. Although
conceptually close, the two methods differ in derivation, implementation,
and theoretical and practical properties. We emphasize the differences and
reproduce a numerical example to compare the performance of the two algorithms.
Comment: Submitted to the proceedings of ENUMATH 201
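The alternating idea behind both algorithms can be caricatured in a tiny setting: a rank-1 product ansatz x = a ⊗ b for the ground state of a Kronecker-sum Hamiltonian, optimized one factor at a time. This is a deliberate simplification for illustration, not the TT/MPS machinery of the paper; the matrices and sizes are arbitrary.

```python
import numpy as np

# Toy alternating scheme: ground state of H = A (x) I + I (x) B with a
# rank-1 product ansatz x = kron(a, b). For this H the product ansatz is
# exact, so the alternating sweep recovers the true ground energy.
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)); A = A + A.T   # Hermitian local terms
B = rng.standard_normal((n, n)); B = B + B.T

a = rng.standard_normal(n); a /= np.linalg.norm(a)
b = rng.standard_normal(n); b /= np.linalg.norm(b)

for sweep in range(3):
    # optimize a with b fixed: effective local operator A + (b^T B b) I
    w, V = np.linalg.eigh(A + (b @ B @ b) * np.eye(n)); a = V[:, 0]
    # optimize b with a fixed
    w, V = np.linalg.eigh(B + (a @ A @ a) * np.eye(n)); b = V[:, 0]

energy = a @ A @ a + b @ B @ b
H = np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)
exact = np.linalg.eigh(H)[0][0]                # lambda_min(A) + lambda_min(B)
```

Because the shift (b^T B b) I does not change eigenvectors, one sweep already lands on the exact product ground state here; the hard cases the paper targets are those where the ansatz rank must adapt, which is what the auxiliary gradient information helps with.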
Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, with the advantage that the number of degrees of freedom of a
low-rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps.
Comment: revised version, to be published in Compressed Sensing and Its
Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral)
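The iterative hard thresholding scheme is easiest to see in the matrix case, where the rank projection is a truncated SVD (exactly the operation whose properties do not extend simply to tensors). A minimal sketch with illustrative sizes; the number of measurements is taken generously so the demo converges quickly, whereas the regime of interest in the chapter is m ≪ n², where the (T)RIP analysis does the work.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 1200
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
Phi = rng.standard_normal((m, n * n)) / np.sqrt(m)              # Gaussian measurement map
y = Phi @ X.ravel()

def hard_threshold(Z, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

L = np.linalg.norm(Phi, 2) ** 2                      # Lipschitz constant of the gradient
Xk = hard_threshold((Phi.T @ y).reshape(n, n), r)    # spectral warm start
for _ in range(500):
    grad = (Phi.T @ (Phi @ Xk.ravel() - y)).reshape(n, n)
    Xk = hard_threshold(Xk - grad / L, r)            # gradient step + rank projection

rel_err = np.linalg.norm(Xk - X) / np.linalg.norm(X)
```

The Riemannian variant mentioned in the chapter replaces the plain SVD truncation by a retraction from the tangent space of the rank-r manifold; the gradient-then-project skeleton stays the same.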
Comparison of some Reduced Representation Approximations
In the field of numerical approximation, specialists tackling highly complex
problems have recently proposed various ways to simplify their underlying
problems. Depending on the problem at hand and the community at work,
different approaches have been developed with some success and have even
reached some maturity; they can now be applied to information analysis or to
the numerical simulation of PDEs. At this point, a crossed analysis and an
effort to understand the similarities and the differences between these
approaches, which started from different backgrounds, is of interest. The
purpose of this paper is to contribute to this effort by comparing some
constructive reduced representations of complex functions. We present in full
detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation
Method (EIM), together with other approaches that fall into the same category.
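As a point of reference, the core of ACA is a greedy sequence of rank-1 "cross" updates taken from rows and columns of the residual. The sketch below uses full pivoting for clarity (practical ACA uses partial pivoting precisely so the residual matrix never has to be formed); the kernel and tolerance are illustrative:

```python
import numpy as np

# Cross approximation of a smooth, well-separated kernel matrix, which is
# numerically low rank. Full pivoting is O(n^2) per step and only for clarity.
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.linspace(2.0, 3.0, n)
A = 1.0 / np.abs(x[:, None] - y[None, :])

def cross_approx(A, tol=1e-10, max_rank=40):
    R = A.copy()                 # explicit residual (avoided in practical ACA)
    cols, rows = [], []
    ref = np.abs(A).max()
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # pivot entry
        if abs(R[i, j]) <= tol * ref:
            break
        cols.append(R[:, j].copy())
        rows.append(R[i, :] / R[i, j])
        R -= np.outer(cols[-1], rows[-1])    # peel off one rank-1 cross
    return np.array(cols).T, np.array(rows)

U, V = cross_approx(A)
rel_err = np.linalg.norm(U @ V - A) / np.linalg.norm(A)
rank = U.shape[1]
```

EIM proceeds in the same greedy spirit but selects interpolation points and basis functions for a parametrized family of functions rather than rows and columns of a fixed matrix, which is one of the parallels the paper draws out.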
Iterative across-time solution of linear differential equations: Krylov subspace versus waveform relaxation
The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation (WR) method based on block Krylov subspaces. Second, we compare this new WR-Krylov implementation against Krylov subspace methods combined with the shift and invert (SAI) technique. Some analysis and numerical experiments are presented. Since the WR-Krylov and SAI-Krylov methods build up the solution simultaneously for the whole time interval and there is no time stepping involved, both methods can be seen as iterative across-time methods. The key difference between these methods and standard time integration methods is that their accuracy is not directly related to the time step size.
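A minimal sketch of the SAI-Krylov ingredient, assuming the model problem u' = -Au with a symmetric positive definite A: Arnoldi is run on B = (I + γA)⁻¹ rather than on A, so the Krylov space captures the slow modes that dominate exp(-tA), and the projection of A is recovered from B's Hessenberg matrix. The matrix, shift, and subspace size are illustrative choices, not the paper's experiments:

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # stiff 1D Laplacian
v = np.random.default_rng(0).standard_normal(n)
t = 1e-3
gamma = t / 10          # common heuristic: shift proportional to the time of interest

def sai_krylov_expm(A, v, t, gamma, m=30):
    """Approximate exp(-t*A) v via Arnoldi on B = (I + gamma*A)^{-1}."""
    n = len(v)
    B = np.linalg.inv(np.eye(n) + gamma * A)   # in practice: factor once, back-solve
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):                          # Arnoldi recurrence on B
        w = B @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    Am = (np.linalg.inv(Hm) - np.eye(m)) / gamma   # projected A, from B = (I+gamma*A)^{-1}
    mu, W = np.linalg.eig(Am)
    e1 = np.zeros(m); e1[0] = 1.0
    y = (W @ (np.exp(-t * mu) * np.linalg.solve(W, e1))).real
    return beta * V[:, :m] @ y

u_sai = sai_krylov_expm(A, v, t, gamma)
lam, Q = np.linalg.eigh(A)                      # reference by spectral decomposition
u_ref = Q @ (np.exp(-t * lam) * (Q.T @ v))
rel_err = np.linalg.norm(u_sai - u_ref) / np.linalg.norm(u_ref)
```

The accuracy depends on the subspace size m and the shift γ, not on any time step, which is the across-time property the abstract highlights.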
How to optimize preconditioners for the conjugate gradient method: a stochastic approach
The conjugate gradient (CG) method is usually used with a preconditioner, which improves the efficiency and robustness of the method. Many preconditioners include parameters, and a proper choice of a preconditioner and its parameters is often not a trivial task. Although many convergence estimates exist that can be used for optimizing preconditioners, they typically hold for all initial guess vectors and thus reflect the worst-case convergence rate. To account for the mean convergence rate instead, in this paper we follow a simple stochastic approach. It is based on trial runs with random initial guess vectors and leads to a functional that can be used to monitor convergence and to optimize preconditioner parameters in CG. The presented numerical experiments show that optimizing this new functional usually yields a better parameter value than optimizing the functional based on the spectral condition number.
Research direction: Programming, parallel computing, multimedia
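The stochastic trial-run idea from the abstract above can be sketched as follows, assuming an SSOR preconditioner whose relaxation parameter ω is tuned by averaging CG iteration counts over random initial guesses. The functional here (mean iteration count) is a stand-in for illustration, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD 1D Laplacian
b = np.ones(n)

def ssor_solve(A, omega):
    """Return r -> M^{-1} r for the SSOR preconditioner
    M = (D + omega*L) D^{-1} (D + omega*U) / (omega*(2 - omega))."""
    D = np.diag(np.diag(A))
    Lo = np.tril(A, -1)
    Up = np.triu(A, 1)
    def solve(r):
        z = np.linalg.solve(D + omega * Lo, r)          # forward sweep
        return omega * (2.0 - omega) * np.linalg.solve(D + omega * Up, D @ z)
    return solve

def pcg_iters(A, b, x0, Msolve, tol=1e-8, maxit=500):
    """Preconditioned CG; returns the iteration count to reach the tolerance."""
    x = x0.copy()
    r = b - A @ x
    z = Msolve(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(maxit):
        if np.linalg.norm(r) <= tol * nb:
            return k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = Msolve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return maxit

trials = [rng.standard_normal(n) for _ in range(5)]     # random initial guesses
mean_iters = {}
for omega in (1.0, 1.3, 1.6, 1.8, 1.9):
    Msolve = ssor_solve(A, omega)
    mean_iters[omega] = np.mean([pcg_iters(A, b, x0, Msolve) for x0 in trials])
best = min(mean_iters, key=mean_iters.get)
```

For this model problem the optimum lies near ω ≈ 2/(1 + sin(πh)), so the scan should favor values well above 1; the paper's point is that a mean-based functional of this kind can rank parameter values better than worst-case condition-number estimates.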